
    Autonomous Robots and Behavior Initiators

    We use an autonomous neural controller (ANC) that handles the mechanical behavior of virtual, multi-joint robots with many moving parts and sensors distributed through the robot’s body, satisfying basic Newtonian laws. As in living creatures, activity inside the robot includes behavior initiators: self-activating networks that burn energy and function without external stimulus. Autonomy is achieved by mimicking the dynamics of biological brains: in resting situations, a default state network (DSN), a specialized set of energy-burning neurons, assumes control and keeps the robot in a safe condition from which other behaviors can be deployed. Our ANC contains several kinds of neural nets trained with gradient descent to perform specialized jobs. The first group generates traveling-wave activity in the robot’s muscles, the second yields basic position/presence predictions about the sensors, and the third acts as a timing master that enables sequential tasks. We add a fourth category of self-activating networks that drive behavior from the inside. Through evolutionary methods, the composite network shares cue information along a few connecting weights, producing self-motivated robots capable of a noticeable level of competence. We show that this spirited robot interacts with humans and, through appropriate interfaces, learns complex behaviors that satisfy unknown, underlying human expectations.
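    A minimal toy sketch of such a composite controller is given below, assuming four illustrative sub-networks (wave generator, sensor predictor, timing master, and default state network) coupled through a few connecting weights. All class names, sizes, and the resting-state threshold are assumptions for illustration, not the authors' ANC.

# Hypothetical sketch of an autonomous neural controller (ANC) built from
# specialized sub-networks: a wave generator for muscle activity, a sensor
# predictor, a timing master, and a self-activating default state network
# (DSN). Shapes and thresholds are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

class SubNet:
    """A tiny one-layer network; stands in for each trained specialist."""
    def __init__(self, n_in, n_out):
        self.w = rng.normal(scale=0.1, size=(n_out, n_in))

    def __call__(self, x):
        return np.tanh(self.w @ x)

class ANC:
    def __init__(self, n_sensors=8, n_muscles=6):
        self.wave_gen = SubNet(1, n_muscles)            # rhythmic muscle waves
        self.predictor = SubNet(n_sensors, n_sensors)   # sensor position/presence cues
        self.timer = SubNet(1, 4)                       # timing master for sequences
        self.dsn = SubNet(4, n_muscles)                 # self-activating default state
        # A few connecting weights share cue information between specialists.
        self.coupling = rng.normal(scale=0.05, size=(n_muscles, n_sensors))

    def step(self, t, sensors):
        phase = np.array([np.sin(0.1 * t)])
        wave = self.wave_gen(phase)
        cues = self.predictor(sensors)
        clock = self.timer(np.array([(t % 100) / 100.0]))
        # With little external stimulus, the DSN keeps the robot in a safe posture.
        if np.abs(sensors).max() < 0.1:
            return self.dsn(clock)
        return wave + self.coupling @ cues

controller = ANC()
for t in range(3):
    print(controller.step(t, rng.normal(size=8)))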

    Cascaded encoders for fine-tuning ASR models on overlapped speech

    Multi-talker speech recognition (MT-ASR) has been shown to improve ASR performance on speech containing overlapping utterances from more than one speaker. Multi-talker models have typically been trained from scratch using simulated or actual overlapping speech datasets. On the other hand, the trend in ASR has been to train foundation models using massive datasets collected from a wide variety of task domains. Given the scale of these models and their ability to generalize well across a variety of domains, it makes sense to consider scenarios where a foundation model is augmented with multi-talker capability. This paper presents an MT-ASR model formed by combining a well-trained foundation model with a multi-talker mask model in a cascaded RNN-T encoder configuration. Experimental results show that the cascade configuration provides improved WER on overlapping speech utterances with respect to a baseline multi-talker model, without sacrificing performance achievable by the foundation model on non-overlapping utterances.
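    As a rough illustration of the cascaded-encoder idea described above, the sketch below freezes a stand-in foundation encoder and stacks a small multi-talker mask encoder on top of it, whose output would feed the RNN-T decoder. The module choices, dimensions, and names are assumptions, not the paper's implementation.

# Hedged sketch of a cascaded encoder: a frozen foundation encoder followed
# by a small multi-talker mask encoder fine-tuned on overlapped speech.
import torch
import torch.nn as nn

class CascadedMTEncoder(nn.Module):
    def __init__(self, feat_dim=80, enc_dim=512):
        super().__init__()
        # Stand-in for the pretrained foundation encoder (kept frozen).
        self.foundation = nn.LSTM(feat_dim, enc_dim, num_layers=2, batch_first=True)
        for p in self.foundation.parameters():
            p.requires_grad = False
        # Small multi-talker mask model trained on overlapping speech.
        self.mask_encoder = nn.LSTM(enc_dim, enc_dim, num_layers=1, batch_first=True)

    def forward(self, feats):
        base, _ = self.foundation(feats)     # generic acoustic representation
        masked, _ = self.mask_encoder(base)  # speaker-aware refinement
        return masked                        # consumed by the RNN-T joint network

enc = CascadedMTEncoder()
out = enc(torch.randn(2, 100, 80))   # (batch, frames, features)
print(out.shape)                     # torch.Size([2, 100, 512])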

    Lepton masses and mixing without Yukawa hierarchies

    We investigate the neutrino masses and mixing pattern in a version of the $SU(3)_c \otimes SU(3)_L \otimes U(1)_X$ model with one extra exotic charged lepton per family, as introduced by Ozer. It is shown that an extended scalar sector, together with a discrete $Z_2$ symmetry, is able to reproduce a consistent lepton mass spectrum without a hierarchy in the Yukawa coupling constants, the former arising from a careful balance between one universal see-saw mechanism and two radiative mechanisms.
    Comment: 7 pages, 2 figures, accepted for publication in Phys. Rev. D
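    For orientation, the generic type-I see-saw relation is shown below purely as an illustration of the mechanism named above; it is not the specific mass matrix constructed in the paper.

% Generic type-I see-saw relation (illustrative only): light neutrino masses
% are suppressed by the heavy scale M_R when the Dirac masses m_D are small.
\begin{equation}
  m_\nu \;\simeq\; -\, m_D \, M_R^{-1} \, m_D^{T}, \qquad m_D \ll M_R .
\end{equation}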